Training Pi-Sigma Network by Online Gradient Algorithm with Penalty for Small Weight Update

Authors

  • Yan Xiong
  • Wei Wu
  • Xidai Kang
  • Chao Zhang
Abstract

A pi-sigma network is a class of feedforward neural networks with product units in the output layer. The online gradient algorithm is the simplest and most frequently used training method for feedforward neural networks. However, when the online gradient algorithm is applied to pi-sigma networks, the weight update increments may become very small, especially early in training, resulting in very slow convergence. To overcome this difficulty, we introduce an adaptive penalty term into the error function so as to increase the magnitude of the update increment of the weights when it is too small. This strategy brings about faster convergence, as shown by the numerical experiments carried out in this letter.
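To make the small-update problem concrete, here is a minimal sketch (Python with NumPy, not code from the letter) of a pi-sigma network and one online gradient step. The gradient for each summing unit's weights contains the product of the other summing units' outputs, which is the factor that can shrink the update early in training when the weights are small. The penalty used below, together with the names forward and online_step and the parameters eta, lam and tau, are illustrative assumptions; the letter's actual adaptive penalty term is not reproduced on this page.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(W, x):
    """Pi-sigma forward pass: K summing units feed a single product unit,
    followed by a sigmoid. W has shape (K, n); x has shape (n,)."""
    h = W @ x                                # summing-unit outputs
    return sigmoid(np.prod(h)), h

def online_step(W, x, t, eta=0.1, lam=0.01, tau=1e-3):
    """One online gradient step on the squared error 0.5*(y - t)**2 with an
    illustrative adaptive penalty (a placeholder, not the letter's exact
    term) that is switched on only when the plain update is very small."""
    y, h = forward(W, x)
    delta = (y - t) * y * (1.0 - y)          # error signal through the sigmoid
    grad = np.empty_like(W)
    for k in range(W.shape[0]):
        # The gradient w.r.t. W[k] carries the product of all OTHER summing
        # units' outputs; with small weights this product nearly vanishes.
        grad[k] = delta * np.prod(np.delete(h, k)) * x
    update = -eta * grad
    if np.linalg.norm(update) < tau:
        # Add the gradient of an assumed penalty -0.5*lam*||W||^2, which
        # pushes the weights (and hence the product terms) away from zero.
        update += eta * lam * W
    return W + update

# Usage: a 3-input, 2-summing-unit pi-sigma net on one training pair.
rng = np.random.default_rng(0)
W = 0.01 * rng.standard_normal((2, 3))       # small initial weights -> tiny updates
x, t = np.array([0.5, -1.0, 0.25]), 1.0
for _ in range(100):
    W = online_step(W, x, t)
```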

Similar References

Convergence of Online Gradient Method for Pi-sigma Neural Networks with Inner-penalty Terms

This paper investigates an online gradient method with an inner penalty for a feedforward network called the pi-sigma network. This network utilizes product cells as the output units to indirectly incorporate the capabilities of higher-order networks while using fewer weights and processing units. Penalty term methods have been widely used to improve the generalization performance...
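The snippet above is cut off before the penalty is written out. As a hedged illustration only (the exact inner-penalty of the cited paper may differ), the penalized error function analysed in this kind of convergence result typically has the form

E(w) = \sum_{j=1}^{J} \tfrac{1}{2}\Big( t_j - g\big(\prod_{k=1}^{K} w_k^{\top} x_j\big) \Big)^2 + \lambda \sum_{k=1}^{K} \|w_k\|^2,

where x_j and t_j are the training inputs and targets, w_k is the weight vector of the k-th summing unit, g is the output activation, and \lambda > 0 is the penalty coefficient.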


Batch Gradient Method for Training of Pi-Sigma Neural Network with Penalty

In this letter, we describe the convergence of a batch gradient method with a penalty term for a feedforward neural network called the pi-sigma neural network, which employs product cells as the output units to implicitly incorporate the capabilities of higher-order neural networks while using a minimal number of weights and processing units. As a rule, the penalty term is ...
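For contrast with the online updates discussed elsewhere on this page, a batch gradient step with a weight-norm penalty can be written (again as a generic, assumed form rather than the cited letter's exact rule) as

w_k^{m+1} = w_k^{m} - \eta \sum_{j=1}^{J} \nabla_{w_k} E_j(w^{m}) - \eta \lambda w_k^{m},

that is, the per-sample error gradients E_j are accumulated over the whole training set before the weights change, and the penalty contributes the decay term \eta\lambda w_k^{m} once per pass.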


Convergence of Online Gradient Method with a Penalty Term for Feedforward Neural Networks with Stochastic Inputs

The online gradient algorithm has been widely used as a learning algorithm for feedforward neural network training. In this paper, we prove a weak convergence theorem for an online gradient algorithm with a penalty term, assuming that the training examples are input in a stochastic way. The monotonicity of the error function in the iterations and the boundedness of the weights are both guaranteed...


Convergence of an Online Gradient Algorithm with Penalty for Two-layer Neural Networks

The online gradient algorithm has been widely used as a learning algorithm for feedforward neural network training. A penalty is a common and popular method for improving the generalization performance of networks. In this paper, a convergence theorem is proved for the online gradient learning algorithm with a penalty, a term proportional to the magnitude of the weights. The monotonicity of the error ...
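Reading the penalty as the usual quadratic term in the weights (one common interpretation of "proportional to the magnitude of the weights", though the cited paper's exact form is not shown here), the online update after presenting the j_m-th example becomes

w^{m+1} = w^{m} - \eta \big( \nabla_{w} E_{j_m}(w^{m}) + \lambda w^{m} \big),

so each presentation both follows the error gradient and slightly decays the weights, which is what such analyses use to keep the weight sequence bounded.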


Adaptive critic for sigma-pi networks

This article presents an investigation which studied how training of sigma-pi networks with the associative reward-penalty (A_R-P) regime may be enhanced by using two networks in parallel. The technique uses what has been termed an unsupervised "adaptive critic element" (ACE) to give critical advice to the supervised sigma-pi network. We utilise the conventions that the sigma-pi neuron model...


Journal:
  • Neural Computation

Volume 19, Issue 12

Pages: -

Published: 2007